Language models have shown widespread applications across various fields, from education to legal consulting, and even in predicting medical risks. However, as these models gain more weight in decision-making processes, they may unintentionally reflect the biases present in the human training data, exacerbating discrimination against minority groups. Research has found that language models exhibit implicit racism, particularly in their treatment of African American English (AAE), demonstrating harmful dialect discrimination that is more negative than any known stereotypes against African Americans. The 'masking disguise' method was used to compare AAE with Standard American English.